R-squared
lapply(models_task.fitted, bayes_R2, cl = 22)
$mod_task00
Estimate Est.Error Q2.5 Q97.5
R2 0.3573123 0.0416154 0.2690994 0.4316917
$mod_task01
Estimate Est.Error Q2.5 Q97.5
R2 0.3677529 0.04018299 0.2826521 0.4403627
$mod_task02
Estimate Est.Error Q2.5 Q97.5
R2 0.3871216 0.04005345 0.3034955 0.4590253
$mod_task03
Estimate Est.Error Q2.5 Q97.5
R2 0.3999841 0.03753747 0.3212656 0.4678505
$mod_task04
Estimate Est.Error Q2.5 Q97.5
R2 0.402649 0.03705896 0.3230708 0.4696571
$mod_task05
Estimate Est.Error Q2.5 Q97.5
R2 0.4235437 0.03557053 0.3478523 0.4876535
$mod_task06
Estimate Est.Error Q2.5 Q97.5
R2 0.4255515 0.03526898 0.3507121 0.4889527
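The estimates above are the Bayesian R² implemented by brms::bayes_R2 (Gelman, Goodrich, Gabry & Vehtari, 2018): for each posterior draw, R² is the variance of the fitted values divided by that variance plus the residual variance, and the draws are then summarized by their mean and quantiles. A minimal sketch of that computation on simulated stand-in draws (all names and numbers here are illustrative, not taken from the fitted models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in posterior draws: a (draws x observations) matrix of fitted values
# and one residual SD per draw. Purely illustrative numbers.
n_draws, n_obs = 1000, 220
y_fit = rng.normal(0.0, 1.0, size=(n_draws, n_obs))  # linear-predictor draws
sigma = rng.uniform(1.0, 1.5, size=n_draws)          # residual-SD draws

# Bayesian R^2, one value per posterior draw:
#   R2_s = Var(fit_s) / (Var(fit_s) + sigma_s^2)
var_fit = y_fit.var(axis=1)
r2_draws = var_fit / (var_fit + sigma**2)

# Summaries analogous to the Estimate / Q2.5 / Q97.5 columns above
estimate = r2_draws.mean()
q2_5, q97_5 = np.quantile(r2_draws, [0.025, 0.975])
```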
Model Diagnostics
loos_task
Output of model 'mod_task00':
Computed from 10000 by 220 log-likelihood matrix
Estimate SE
elpd_loo -222.8 10.6
p_loo 13.8 1.4
looic 445.6 21.3
------
Monte Carlo SE of elpd_loo is 0.0.
All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
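For reference, the three rows of each loo table are linked: looic is just elpd_loo on the deviance scale (looic = -2 * elpd_loo), and p_loo is the effective number of parameters. A quick check against the printed values for mod_task00:

```python
# The loo output reports three linked quantities:
#   elpd_loo -- expected log pointwise predictive density (LOO)
#   p_loo    -- effective number of parameters
#   looic    -- elpd_loo on the deviance scale
elpd_loo = -222.8          # value printed for mod_task00
looic = -2 * elpd_loo      # 445.6, matching the table
```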
Output of model 'mod_task01':
Computed from 10000 by 220 log-likelihood matrix
Estimate SE
elpd_loo -221.8 11.3
p_loo 14.6 1.6
looic 443.7 22.5
------
Monte Carlo SE of elpd_loo is 0.0.
All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
Output of model 'mod_task02':
Computed from 10000 by 220 log-likelihood matrix
Estimate SE
elpd_loo -219.6 12.0
p_loo 15.8 1.7
looic 439.1 24.0
------
Monte Carlo SE of elpd_loo is 0.0.
All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
Output of model 'mod_task03':
Computed from 10000 by 220 log-likelihood matrix
Estimate SE
elpd_loo -220.0 12.3
p_loo 16.8 1.8
looic 440.0 24.6
------
Monte Carlo SE of elpd_loo is 0.0.
All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
Output of model 'mod_task04':
Computed from 10000 by 220 log-likelihood matrix
Estimate SE
elpd_loo -220.1 12.4
p_loo 16.9 1.9
looic 440.2 24.8
------
Monte Carlo SE of elpd_loo is 0.1.
All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
Output of model 'mod_task05':
Computed from 10000 by 220 log-likelihood matrix
Estimate SE
elpd_loo -216.2 12.7
p_loo 17.7 2.2
looic 432.4 25.4
------
Monte Carlo SE of elpd_loo is 0.1.
All Pareto k estimates are good (k < 0.5).
See help('pareto-k-diagnostic') for details.
Output of model 'mod_task06':
Computed from 10000 by 220 log-likelihood matrix
Estimate SE
elpd_loo -216.4 12.8
p_loo 17.9 2.2
looic 432.7 25.6
------
Monte Carlo SE of elpd_loo is 0.1.
Pareto k diagnostic values:
Count Pct. Min. n_eff
(-Inf, 0.5] (good) 219 99.5% 2575
(0.5, 0.7] (ok) 1 0.5% 1079
(0.7, 1] (bad) 0 0.0% <NA>
(1, Inf) (very bad) 0 0.0% <NA>
All Pareto k estimates are ok (k < 0.7).
See help('pareto-k-diagnostic') for details.
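The Pareto k diagnostic bins printed above use fixed cutpoints at 0.5, 0.7, and 1 (the thresholds in the loo version that produced this output; newer loo releases use a sample-size-dependent cutoff instead). A sketch of that classification:

```python
def pareto_k_bin(k):
    """Classify a Pareto k-hat estimate with the cutpoints shown above."""
    if k <= 0.5:
        return "good"
    if k <= 0.7:
        return "ok"
    if k <= 1:
        return "bad"
    return "very bad"

# For mod_task06: 219 observations fall in (-Inf, 0.5], one in (0.5, 0.7]
bins = [pareto_k_bin(k) for k in (0.3, 0.55, 0.8, 1.2)]
# -> ['good', 'ok', 'bad', 'very bad']
```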
Model comparisons:
elpd_diff se_diff
mod_task05 0.0 0.0
mod_task06 -0.2 0.2
mod_task02 -3.4 4.2
mod_task03 -3.8 4.2
mod_task04 -3.9 4.2
mod_task01 -5.6 5.2
mod_task00 -6.6 5.8
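elpd_diff is each model's elpd_loo minus the best model's elpd_loo, so the comparison column can be reproduced from the per-model summaries above (se_diff, by contrast, needs the pointwise log-likelihoods and cannot be recovered from the summaries). A sketch using the printed elpd_loo values:

```python
# elpd_loo values copied from the per-model summaries above
elpd = {
    "mod_task00": -222.8, "mod_task01": -221.8, "mod_task02": -219.6,
    "mod_task03": -220.0, "mod_task04": -220.1, "mod_task05": -216.2,
    "mod_task06": -216.4,
}

best = max(elpd.values())  # mod_task05's elpd_loo
elpd_diff = {m: round(e - best, 1)
             for m, e in sorted(elpd.items(), key=lambda kv: -kv[1])}
# -> mod_task05: 0.0, mod_task06: -0.2, ..., mod_task00: -6.6
```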
mod_task.weights
  strategy   mod_task00   mod_task01   mod_task02   mod_task03   mod_task04   mod_task05 mod_task06
1 loo      0.0007045120 1.833223e-03 1.808916e-02 0.0114075587  0.010395200 0.5159210249  0.4416493
2 waic     0.0006280679 1.658104e-03 1.680532e-02 0.0106377771  0.009930404 0.5157972762  0.4445431
3 stacking 0.2389476558 7.881162e-07 5.671111e-05 0.0002777408  0.038071391 0.0008140229  0.7218317
Model selection is based on the LOO weights, which sum to 1; the sixth model (mod_task05) receives the most weight under the loo and waic strategies.
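The loo and waic rows behave essentially like a softmax over the models' elpd values (pseudo-BMA weights; the printed numbers differ slightly, likely because of the regularization loo_model_weights applies by default), while stacking solves a separate optimization and can distribute weight very differently, as the third row shows. A sketch of the softmax calculation from the printed elpd_loo values:

```python
import math

# elpd_loo values from the summaries above, mod_task00 .. mod_task06
elpd = [-222.8, -221.8, -219.6, -220.0, -220.1, -216.2, -216.4]

m = max(elpd)                                 # subtract max for stability
unnorm = [math.exp(e - m) for e in elpd]
weights = [u / sum(unnorm) for u in unnorm]
# weights[5] (mod_task05) ~ 0.53 and weights[6] (mod_task06) ~ 0.43,
# close to the printed loo/waic rows
```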

Posterior predictive check of the winning model
pp_check(models_task.fitted[6]$mod_task05, type = "ecdf_overlay")
## Using 10 posterior draws for ppc type 'ecdf_overlay' by default.
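The "ecdf_overlay" check draws the empirical CDF of the observed outcome over the ECDFs of a few posterior-predictive draws; where the curves separate, the model mispredicts the outcome's distribution. The ECDF itself is simple (a sketch, with toy data):

```python
import numpy as np

def ecdf(values):
    """Sorted values and cumulative proportions, i.e. the points of an ECDF."""
    xs = np.sort(np.asarray(values, dtype=float))
    ys = np.arange(1, xs.size + 1) / xs.size
    return xs, ys

# Toy data: the ECDF jumps by 1/n at each sorted observation
xs, ys = ecdf([2.0, 1.0, 3.0, 2.5])
# xs -> [1.0, 2.0, 2.5, 3.0]; ys -> [0.25, 0.5, 0.75, 1.0]
```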

Pairs plot for three parameters: red dots indicate divergent transitions in the Markov chains
posterior_model_task_5 <- as.array(models_task.fitted[6]$mod_task05)
np_task_model_5 <- nuts_params(models_task.fitted[6]$mod_task05)
mcmc_pairs(posterior_model_task_5,
           pars = c("b_zbv", "b_zlog.apen", "b_conditionactive_rhTMS"),
           np = np_task_model_5, off_diag_args = list(size = 0.75))

Model coefficients: considerable uncertainty around the stimulation effect (90% intervals)
mod <- models_task.fitted[6]$mod_task05
mcmc_intervals_data(as.matrix(mod), prob_outer = 0.9) %>%
  filter(parameter != "lp__", !str_detect(parameter, "subj")) %>%
  filter(!str_detect(parameter, "Intercept")) %>%
  ggplot(aes(y = m, ymin = ll, ymax = hh, x = parameter)) +
  geom_pointrange(position = position_dodge(width = 0.2)) +
  coord_flip() +
  geom_hline(yintercept = 0, color = "red") +
  labs(y = "Coefficient")
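mcmc_intervals_data returns, per parameter, a point estimate (m, the posterior median by default) and interval limits (ll/hh); with prob_outer = 0.9 the outer limits are the 5% and 95% posterior quantiles, so a coefficient "crosses zero" exactly when that interval contains the red line at 0. A sketch on simulated stand-in draws for a single coefficient (numbers illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in posterior draws for one coefficient
draws = rng.normal(0.2, 0.1, size=4000)

m = np.median(draws)                        # the 'm' column
ll, hh = np.quantile(draws, [0.05, 0.95])   # 'll'/'hh' with prob_outer = 0.9
crosses_zero = ll < 0 < hh                  # does the interval span zero?
```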
